1. Software Description

1.1 Mind+ Introduction

Mind+ is a youth programming software based on Scratch 3.0. It supports graphical and code-based programming for various open-source hardware such as Arduino, UNIHIKER K10, and UNIHIKER M10. You can complete programming simply by dragging graphical program blocks, or use high-level programming languages like Python/C/C++, allowing everyone to easily experience the joy of creation.

Mind+ official website download link: https://www.dfrobot.com/product-2691.html

1.2 UNIHIKER M10 Introduction

UNIHIKER M10 is a highly integrated, domestically developed open-source teaching hardware (with independent intellectual property rights) designed for K-12 teachers and students, matching the new curriculum's interdisciplinary teaching requirements for subjects such as Information Technology, Physics, and Biology. It integrates a single-board computer (4-core CPU / 512MB RAM / 16GB storage) running Linux, a complete Python environment with common Python libraries pre-installed, a 2.8-inch color touchscreen, and a rich set of sensors. It provides a Python teaching platform that can be started in just two steps.

UNIHIKER M10 store purchase link: buy now

2. Hardware Connection

Materials

  • Hardware
    • UNIHIKER M10 x 1
    • HUSKYLENS 2 x 1
    • Type-C USB Cable x2
    • Double Sided PH2.0-4P white 20cm silicone wire x1

Prepare two Type-C USB cables and one double-sided PH2.0-4P white 20cm silicone wire. Use one Type-C USB cable to connect the computer to the UNIHIKER M10, and use the double-sided PH2.0-4P silicone wire to connect the UNIHIKER M10 to the HuskyLens 2. Then use the second Type-C USB cable to connect the HuskyLens 2's Type-C port to a power source to provide extra power to the HuskyLens 2. The wiring diagram can be referenced below.

Interface Diagram

3. Load Huskylens 2 Library

3.1 Load Huskylens 2 Library in Mind+ 2.0

Double-click to open the Mind+ 2.0 programming software. After opening, click to select "Python Block Mode", as shown in the figure below.

Interface Diagram

Wait for the Python Block Mode to initialize. Click 'Extension' in the bottom left corner. If using for the first time, you need to download the corresponding controller extension library. In the top right corner of the extension library page, search for "Unihiker M10". Click the download button in the top right corner of the "UNIHIKER M10" image to download the main controller library.

Interface Diagram

After the UNIHIKER M10 extension library is downloaded, click to load it. Once loaded successfully, it will appear as shown in the figure below.

Interface Diagram

In the top right corner of the extension library page, search for "HUSKYLENS 2". Click the download button in the top right corner of the "HuskyLens 2" image to download the corresponding library.

Interface Diagram

After downloading, click the "HuskyLens 2 AI Camera" to load this extension library. The completed loading is shown below.

Interface Diagram

3.2 Load HUSKYLENS 2 Library in Mind+ 1.8 Series

Open the Mind+ programming software and click in the top right corner to switch to Python mode.

Interface Diagram

Click 'Extension' in the bottom left corner. In the 'Official Library', select 'UNIHIKER M10' and click the image to load this library.

Interface Diagram

Upon successful loading, the following will be displayed.

Interface Diagram

After the UNIHIKER M10 is loaded, continue to the 'User Library' to add the HuskyLens 2 library.

Interface Diagram

Click the HuskyLens 2 library image above to load it. After successful loading, it will look as shown. Finally, click "Back" in the top left corner to return to the programming interface.

Interface Diagram

Click 'Code' in the top-left corner to switch to the code editing page.

Interface Diagram

The following visual recognition functionality is based on the HUSKYLENS 2 firmware version 1.0.

The current latest system version is 1.1.6. Find System Settings > Device Information to check your current system version. We recommend updating to the latest version to experience new features. Update tutorial: Click here

Interface Diagram

In addition to Mind+, you can also use other Python IDEs for programming and project development. For how to load the HuskyLens 2 library in a Python IDE, please refer to:
Huskylens 2 library instruction

4. Face Recognition Blocks

4.1 Face Recognition - Output Relevant Data

Under the Face Recognition function, when a face appears on the HuskyLens 2 screen, it can be detected and framed, allowing the acquisition of the total number of faces detected and related data for a specified face.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_FACE_RECOGNITION)
line1=u_gui.draw_text(text="Total faces:",x=0,y=0,font_size=20, color="#0000FF")
line2=u_gui.draw_text(text="Center face:",x=0,y=40,font_size=20, color="#0000FF")
line3=u_gui.draw_text(text="First face ID:",x=0,y=80,font_size=20, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_FACE_RECOGNITION)
    if (huskylens.available(ALGORITHM_FACE_RECOGNITION)):
        line1.config(text=(str("Total faces:") + str((huskylens.getCachedResultNum(ALGORITHM_FACE_RECOGNITION)))))
        line2.config(text=(str("Center face:") + str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1))))
        line3.config(text=(str("First face ID:") + str(huskylens.getCachedResultByIndex(ALGORITHM_FACE_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_FACE_RECOGNITION, 1-1) else -1)))

Click 'Run' in Mind+ and wait for the program to run.

Point the HuskyLens 2 camera at the face in the frame to learn it. For detailed instructions on how to learn faces, refer to: HUSKYLENS 2 WIKI.

Once learning is complete, aim at the learned face and observe the output results on the M10 screen.

Output Result: As shown below, you can obtain the total number of faces detected in the frame (learned or not), the ID of the face closest to the center of the camera frame, and the ID of the first detected face (unlearned faces have an ID of 0).

Interface Diagram
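When writing the polling loop by hand rather than exporting it from blocks, the same read-check-report pattern can be factored into a small function. The sketch below is hypothetical: `FakeLens` only stands in for the `HuskylensV2_I2C` object so the structure can be tried without hardware; on the UNIHIKER M10 you would pass the `huskylens` instance and `ALGORITHM_FACE_RECOGNITION` from the example above.

```python
import time

def poll_face_count(device, algorithm, on_update, interval=0.1):
    """Fetch one frame of results; report the face count when data is available."""
    device.getResult(algorithm)          # refresh the cached results
    if device.available(algorithm):
        on_update(device.getCachedResultNum(algorithm))
    time.sleep(interval)                 # give the I2C bus a short rest

# Stand-in device for trying the loop logic without a HuskyLens 2 attached.
class FakeLens:
    def getResult(self, algorithm):
        pass
    def available(self, algorithm):
        return True
    def getCachedResultNum(self, algorithm):
        return 2

counts = []
poll_face_count(FakeLens(), 0, counts.append, interval=0)
print(counts)  # [2]
```

On real hardware the call would sit inside `while True:` exactly as in the generated program, with `on_update` writing to a GUI text line.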

4.2 Get Detailed Data of a Specific Face in the Frame

It can acquire facial feature and position data of a specified face. The readable face data includes: Face ID, Face Name, Width, Height, X and Y coordinates of the face's center point, X/Y coordinates of the left/right eye, X/Y coordinates of the left/right corner of the mouth, and X/Y coordinates of the nose.

The example program below shows how to get the facial feature data of the face closest to the center of the camera frame. This data can also be obtained for unlearned faces.

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_FACE_RECOGNITION)
line1=u_gui.draw_text(text="Center face ID:",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="Leye coords:",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="Reye coords:",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="Lmouth coords:",x=0,y=120,font_size=15, color="#0000FF")
line5=u_gui.draw_text(text="Rmouth coords:",x=0,y=160,font_size=15, color="#0000FF")
line6=u_gui.draw_text(text="Nose coords:",x=0,y=200,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_FACE_RECOGNITION)
    if (huskylens.available(ALGORITHM_FACE_RECOGNITION)):
        line1.config(text=(str("Center face ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1))))
        line2.config(text=(str("Leye coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).leye_x if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).leye_y if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1))))))))
        line3.config(text=(str("Reye coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).reye_x if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).reye_y if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1))))))))
        line4.config(text=(str("Lmouth coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).lmouth_x if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).lmouth_y if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1))))))))
        line5.config(text=(str("Rmouth coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).rmouth_x if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).rmouth_y if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1))))))))
        line6.config(text=(str("Nose coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).nose_x if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION).nose_y if huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION) else -1))))))))

Output Result: As shown below, after running the program, the M10 screen displays the Face ID and the facial feature coordinate data for that face. Since this face has not been learned, the Face ID is 0.

Interface Diagram
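Each display line in the program above calls `getCachedCenterResult` twice per value. When writing the code by hand, a small helper keeps the None check in one place; `safe_attr` and `fmt_point` are hand-written sketches (not part of the `dfrobot_huskylensv2` library), and the attribute names follow those used in the generated code, e.g. `leye_x`.

```python
def safe_attr(result, name, default=-1):
    """Return one attribute of a detection result, or default when nothing was detected."""
    return getattr(result, name) if result is not None else default

def fmt_point(result, x_attr, y_attr):
    """Format a coordinate pair such as leye_x/leye_y as 'x,y' (or '-1,-1')."""
    return f"{safe_attr(result, x_attr)},{safe_attr(result, y_attr)}"

# Stub detection result for trying the helpers without hardware; on the M10 you
# would fetch once per loop: face = huskylens.getCachedCenterResult(ALGORITHM_FACE_RECOGNITION)
class StubFace:
    leye_x = 120
    leye_y = 85

print(fmt_point(StubFace(), "leye_x", "leye_y"))  # 120,85
print(fmt_point(None, "leye_x", "leye_y"))        # -1,-1
```

A line such as `line2.config(text="Leye coords:" + fmt_point(face, "leye_x", "leye_y"))` then replaces the long nested expression above.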

In addition to the data mentioned above, more face data can be acquired, such as the total count of a specific face in the frame, the name of that face, and data related to the first instance of that face. (This data can also be obtained for unlearned faces).

Taking a learned face as an example, the example program is as follows:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_FACE_RECOGNITION)
line1=u_gui.draw_text(text="Total faces ID1:",x=0,y=0,font_size=20, color="#0000FF")
line2=u_gui.draw_text(text="Face ID1 name:",x=0,y=40,font_size=20, color="#0000FF")
line3=u_gui.draw_text(text="First face ID:",x=0,y=80,font_size=20, color="#0000FF")
line4=u_gui.draw_text(text="Coords:",x=0,y=120,font_size=20, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_FACE_RECOGNITION)
    if (huskylens.available(ALGORITHM_FACE_RECOGNITION)):
        line1.config(text=(str("Total faces ID1:") + str((huskylens.getCachedResultNumByID(ALGORITHM_FACE_RECOGNITION, 1)))))
        line2.config(text=(str("Face ID1 name:") + str((huskylens.getCachedResultByID(ALGORITHM_FACE_RECOGNITION, 1).name if huskylens.getCachedResultByID(ALGORITHM_FACE_RECOGNITION, 1) else -1))))
        line4.config(text=(str("Coords:") + str((str((huskylens.getCachedIndexResultByID(ALGORITHM_FACE_RECOGNITION, 1, 1-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_FACE_RECOGNITION, 1, 1-1) else -1)) + str((str(",") + str((huskylens.getCachedIndexResultByID(ALGORITHM_FACE_RECOGNITION, 1, 1-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_FACE_RECOGNITION, 1, 1-1) else -1))))))))

Output Result: As shown below, after running the program, the M10 screen displays the total count of Face ID 1 in the frame, the name of that face, and the center XY coordinates of the first detected Face ID 1. (The default face name is "Face". For instructions on setting a name, see the Face Recognition function description - Set Name.)

Interface Diagram

5. Object Recognition Blocks

5.1 Object Recognition - Output Relevant Data

It can recognize objects within the HuskyLens 2 field of view (must be among the 80 fixed categories of recognizable objects, see Object Recognition function description for details), and acquire object-related data. Readable data includes: the total number of recognizable objects in the frame, the ID of the object closest to the center of the HuskyLens 2 camera frame, and the first detected object.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_OBJECT_RECOGNITION)
line1=u_gui.draw_text(text="Total objects:",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="Learned object count:",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="Center object:",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="First object ID:",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_OBJECT_RECOGNITION)
    if (huskylens.available(ALGORITHM_OBJECT_RECOGNITION)):
        line1.config(text=(str("Total objects:") + str((huskylens.getCachedResultNum(ALGORITHM_OBJECT_RECOGNITION)))))
        line2.config(text=(str("Learned object count:") + str((huskylens.getCachedResultMaxID(ALGORITHM_OBJECT_RECOGNITION)))))
        line3.config(text=(str("Center object:") + str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_RECOGNITION) else -1))))
        line4.config(text=(str("First object ID:") + str((huskylens.getCachedResultByIndex(ALGORITHM_OBJECT_RECOGNITION, 1 - 1).ID if huskylens.getCachedResultByIndex(ALGORITHM_OBJECT_RECOGNITION, 1 - 1) else -1))))

Click 'Run' in Mind+ and wait for the program to upload. HuskyLens 2 will automatically enter the Object Recognition function. Point the HuskyLens 2 camera at the objects in the frame to learn them. For detailed instructions on how to learn objects, refer to: HUSKYLENS 2 WIKI

Once learning is complete, aim at the object and observe the output results on the screen.

Output Result: It can output corresponding data as required. For example, the first line outputs the total number of detected objects, and the second line outputs the total number of learned objects. The third and fourth lines output the specified object IDs. Learned objects are assigned IDs in learning order, while unlearned objects have an ID of 0.

Interface Diagram

5.2 Get Data for a Specific Object in the Frame

After HuskyLens 2 recognizes an object, it can acquire data for a specific object in the frame. For example, it can determine if a specific object is in the frame, get the name of a specific object, get the count of similar objects in the frame, and when multiple similar objects appear, it can be set to acquire parameters for a specific one, including its Name, X/Y coordinates, Width, and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_OBJECT_RECOGNITION)
line1=u_gui.draw_text(text="ID2 count:",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="ID2 name:",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="First ID2",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="Coords:",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_OBJECT_RECOGNITION)
    if (huskylens.available(ALGORITHM_OBJECT_RECOGNITION)):
        line1.config(text=(str("ID2 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_OBJECT_RECOGNITION, 2)))))
        line2.config(text=(str("ID2 name:") + str((huskylens.getCachedResultByID(ALGORITHM_OBJECT_RECOGNITION, 2).name if huskylens.getCachedResultByID(ALGORITHM_OBJECT_RECOGNITION, 2) else -1))))
        line3.config(text="First ID2")
        line4.config(text=(str("Coords:") + str((str((huskylens.getCachedIndexResultByID(ALGORITHM_OBJECT_RECOGNITION, 2, 1-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_OBJECT_RECOGNITION, 2, 1-1) else -1)) + str((str(",") + str((huskylens.getCachedIndexResultByID(ALGORITHM_OBJECT_RECOGNITION, 2, 1-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_OBJECT_RECOGNITION, 2, 1-1) else -1))))))))

Output Result: As shown, it can acquire the total number of objects in the frame, the count and name of ID 2 objects, and the coordinate position of the first detected ID 2 object.

Interface Diagram
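The phrase "determine if a specific object is in the frame" above maps to checking whether `getCachedResultByID` returns a result. The helper below is a hand-written sketch; `FakeLens` only stands in for the real device so the logic can be tried without hardware (on the UNIHIKER M10 you would pass the `HuskylensV2_I2C` instance and `ALGORITHM_OBJECT_RECOGNITION`).

```python
def object_present(device, algorithm, obj_id):
    """True when the cached results contain at least one detection with this ID."""
    return device.getCachedResultByID(algorithm, obj_id) is not None

# Stand-in device: pretends the given IDs are currently detected in the frame.
class FakeLens:
    def __init__(self, ids_in_frame):
        self._ids = set(ids_in_frame)
    def getCachedResultByID(self, algorithm, obj_id):
        return object() if obj_id in self._ids else None

lens = FakeLens({2})
print(object_present(lens, 0, 2))  # True
print(object_present(lens, 0, 5))  # False
```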

6. Object Tracking Blocks

6.1 Object Tracking - Output Relevant Data

When HUSKYLENS 2 detects a trackable target object, it can acquire relevant tracking data. The data that can be acquired includes: the object's ID, Name, XY coordinates, Width, and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_OBJECT_TRACKING)
line1=u_gui.draw_text(text="Object ID:",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="Object name:",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="Object coords:",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="Object width & height:",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_OBJECT_TRACKING)
    if (huskylens.available(ALGORITHM_OBJECT_TRACKING)):
        line1.config(text=(str("Object ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING).ID if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING) else -1))))
        line2.config(text=(str("Object name:") + str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING).name if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING) else -1))))
        line3.config(text=(str("Object coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING).xCenter if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING).yCenter if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING) else -1))))))))
        line4.config(text=(str("Object width & height:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING).width if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING).height if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING) else -1))))))))

Click 'Run' in Mind+ and wait for the program to upload.

Point the HUSKYLENS 2 camera at the object to be tracked (the target object must be framed first; see HUSKYLENS 2 WIKI).

After framing is complete, aim at the target object to observe the output results.

Output Result: It can output the tracked object's ID, Name, XY coordinates, Width, and Height. The default object name is "Object".

Interface Diagram
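A common use of the tracked object's `xCenter` is deciding which way a robot or gimbal should turn. The sketch below is an illustration only; the 320-pixel frame width and the `dead_zone` value are assumptions to adjust for your setup, not HuskyLens 2 constants.

```python
def steer_direction(x_center, frame_width=320, dead_zone=20):
    """Classify a tracked object's x coordinate as 'left', 'center', or 'right'.

    frame_width and dead_zone are assumed values; tune them for your camera.
    """
    mid = frame_width / 2
    if x_center < mid - dead_zone:
        return "left"
    if x_center > mid + dead_zone:
        return "right"
    return "center"

print(steer_direction(40))   # left
print(steer_direction(160))  # center
print(steer_direction(310))  # right
```

On hardware, the input would be `huskylens.getCachedCenterResult(ALGORITHM_OBJECT_TRACKING).xCenter` from the example above (after checking the result is not None).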

7. Color Recognition Blocks

7.1 Color Recognition - Output Relevant Data

It can recognize color blocks within the HUSKYLENS 2 field of view and output color block-related data. Readable data includes: the total number of detected color blocks, the total number of learned color blocks, the ID of the color block closest to the center of the HUSKYLENS 2 camera frame, and the ID of the first detected color block.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_COLOR_RECOGNITION)
line1=u_gui.draw_text(text="Total colors:",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="Learned color count:",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="Center color ID:",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="First color ID:",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_COLOR_RECOGNITION)
    if (huskylens.available(ALGORITHM_COLOR_RECOGNITION)):
        line1.config(text=(str("Total colors:") + str((huskylens.getCachedResultNum(ALGORITHM_COLOR_RECOGNITION)))))
        line2.config(text=(str("Learned color count:") + str((huskylens.getCachedResultMaxID(ALGORITHM_COLOR_RECOGNITION)))))
        line3.config(text=(str("Center color ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_COLOR_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_COLOR_RECOGNITION) else -1))))
        line4.config(text=(str("First color ID:") + str((huskylens.getCachedResultByIndex(ALGORITHM_COLOR_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_COLOR_RECOGNITION, 1-1) else -1))))

Click 'Run' in Mind+ and wait for the program to upload.

Point the HUSKYLENS 2's crosshair at a color block to learn it. For detailed instructions on how to learn colors, refer to: HUSKYLENS 2 WIKI

Once learning is complete, aim the HUSKYLENS 2 camera at the color blocks and observe the output results.

Output Result: It can output the total number of detected color blocks; all framed color blocks are counted, whether they have been learned or not. It can also output corresponding data as required: the second line outputs the total number of learned color blocks, and the third and fourth lines output the specified color block IDs. The color block near the center of the frame is outlined in a white box, indicating an unlearned color block, so the "ID of color block near center" is 0.

Interface Diagram

7.2 Get Data for a Specific Color

After HUSKYLENS 2 recognizes a color, it can acquire data for a specific color in the frame. For example, it can determine if a specific color is in the frame, get the name of a specific color, or get the count of color blocks of the same specific color in the frame. When multiple color blocks of the same color appear, it can be set to acquire parameters for a specific one, including its Name, X/Y coordinates, Width, and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_COLOR_RECOGNITION)
line1=u_gui.draw_text(text="Color ID1 count:",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="Color ID1 name:",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="First color  ID1",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="Coords:",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_COLOR_RECOGNITION)
    if (huskylens.available(ALGORITHM_COLOR_RECOGNITION)):
        if ((huskylens.getCachedResultByID(ALGORITHM_COLOR_RECOGNITION, 1) is not None)):
            line1.config(text=(str("Color ID1 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_COLOR_RECOGNITION, 1)))))
            line2.config(text=(str("Color ID1 name:") + str((huskylens.getCachedResultByID(ALGORITHM_COLOR_RECOGNITION, 1).name if huskylens.getCachedResultByID(ALGORITHM_COLOR_RECOGNITION, 1) else -1))))
            line4.config(text=(str("Coords:") + str((str((huskylens.getCachedIndexResultByID(ALGORITHM_COLOR_RECOGNITION, 1, 1-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_COLOR_RECOGNITION, 1, 1-1) else -1)) + str((str(",") + str((huskylens.getCachedIndexResultByID(ALGORITHM_COLOR_RECOGNITION, 1, 1-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_COLOR_RECOGNITION, 1, 1-1) else -1))))))))

Output Result: As shown, it can acquire the total count and name of color block ID 1 in the frame, and the coordinate position of the first detected ID 1 color block. (Color names can be customized; the default is "Color".)

Interface Diagram

8. Object Classification

8.1 Recognize Learned Objects and Output ID/Name

Under the Object Classification function, HUSKYLENS 2 can classify 1000 fixed categories of objects. The following example program can be used to get the corresponding ID and name of the recognized learned object.

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_OBJECT_CLASSIFICATION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=16, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=16, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_OBJECT_CLASSIFICATION)
    if (huskylens.available(ALGORITHM_OBJECT_CLASSIFICATION)):
        line1.config(text=(str("Object ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_CLASSIFICATION).classID if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_CLASSIFICATION) else -1))))
        line2.config(text=(str("Object name:") + str((huskylens.getCachedCenterResult(ALGORITHM_OBJECT_CLASSIFICATION).name if huskylens.getCachedCenterResult(ALGORITHM_OBJECT_CLASSIFICATION) else -1))))

Click 'Run' in Mind+ and wait for the program to upload.

After HUSKYLENS 2 finishes learning the object, point the HUSKYLENS 2 camera at the learned object and observe the output results. For detailed instructions on how to learn objects, refer to: HUSKYLENS 2 WIKI

Output Result: As shown below, when a learned object appears in the frame, its name, ID, and confidence level will be displayed (confidence level cannot currently be retrieved via program).

Interface Diagram

9. Self-learning Classification Blocks

9.1 Recognize Learned Objects and Output ID/Name

Under the self-learning classification function, after an object has been learned, HUSKYLENS 2 can recognize it when it sees it again. The following example program can be used to get the corresponding ID and name of the recognized learned object.

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_SELF_LEARNING_CLASSIFICATION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=16, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=16, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_SELF_LEARNING_CLASSIFICATION)
    if (huskylens.available(ALGORITHM_SELF_LEARNING_CLASSIFICATION)):
        line1.config(text=(str("Object ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_SELF_LEARNING_CLASSIFICATION).ID if huskylens.getCachedCenterResult(ALGORITHM_SELF_LEARNING_CLASSIFICATION) else -1))))
        line2.config(text=(str("Object name:") + str((huskylens.getCachedCenterResult(ALGORITHM_SELF_LEARNING_CLASSIFICATION).name if huskylens.getCachedCenterResult(ALGORITHM_SELF_LEARNING_CLASSIFICATION) else -1))))

Click 'Run' in Mind+ and wait for the program to upload.

After HUSKYLENS 2 finishes learning the object, point the HUSKYLENS 2 camera at the learned object and observe the output results. For detailed instructions on how to learn objects, refer to: HUSKYLENS 2 WIKI

Output Result: As shown below, when a learned object appears in the frame, it will be framed, and its name, ID, and confidence level will be displayed. If the object's name has not been set, the default output name is: Object. For how to set the name, please see HUSKYLENS 2 WIKI

Interface Diagram

10. Hand Gesture Recognition Blocks

10.1 Hand Gesture Recognition - Output Relevant Data

It can detect hand gestures within the HuskyLens 2 field of view and acquire gesture-related data. Readable data includes: the total number of gestures detected in the frame, the ID of the gesture closest to the center of the HuskyLens 2 camera frame, and the ID of the first detected gesture.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_HAND_RECOGNITION)
line1=u_gui.draw_text(text="Total hands:",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="Learned hand count:",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="Center hand ID:",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="First hand ID:",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_HAND_RECOGNITION)
    if (huskylens.available(ALGORITHM_HAND_RECOGNITION)):
        line1.config(text=(str("Total hands:") + str((huskylens.getCachedResultNum(ALGORITHM_HAND_RECOGNITION)))))
        line2.config(text=(str("Learned hand count:") + str((huskylens.getCachedResultMaxID(ALGORITHM_HAND_RECOGNITION)))))
        line3.config(text=(str("Center hand ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1))))
        line4.config(text=(str("First hand ID:") + str((huskylens.getCachedResultByIndex(ALGORITHM_HAND_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_HAND_RECOGNITION, 1-1) else -1))))

Click 'Run' in Mind+ and wait for the program to upload. HuskyLens 2 will enter the Hand Gesture Recognition function. Point the HuskyLens 2 camera at the gesture in the frame to learn it. For detailed instructions on how to learn gestures, refer to: HUSKYLENS 2 WIKI

Once learning is complete, aim at the gesture and observe the output results on the screen.

Output Result: It can output the total number of detected gestures. All detected gestures will be counted (i.e., framed by a box), whether they have been learned or not. It can output corresponding data as required, such as the second line, which outputs the total number of learned gestures. The third and fourth lines output the specified gesture ID. Learned gestures are assigned IDs in learning order, while unlearned gestures have an ID of 0.

Interface Diagram

10.2 Get Data for a Specific Gesture in the Frame

It can acquire keypoint data for a specific gesture, such as its ID, Name, fingers, and wrist. Detailed data includes: Gesture ID, Gesture Name, X and Y coordinates of the gesture's center point, Width, Height, Wrist X/Y coordinates, and the X/Y coordinates for the base, joint, and tip of each finger. For details, please see the Hand Gesture Recognition Blocks Description.

The example program below shows how to get the X/Y coordinate data for the wrist and the tips of all five fingers for the gesture closest to the center of the camera frame. This data can also be obtained for unlearned gestures.

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_HAND_RECOGNITION)
line1=u_gui.draw_text(text="Center hand ID:",x=0,y=0,font_size=10, color="#0000FF")
line2=u_gui.draw_text(text="Wrist coords:",x=0,y=40,font_size=10, color="#0000FF")
line3=u_gui.draw_text(text="Thumb tip coords:",x=0,y=80,font_size=10, color="#0000FF")
line4=u_gui.draw_text(text="Index finger tip coords:",x=0,y=120,font_size=10, color="#0000FF")
line5=u_gui.draw_text(text="Middle finger tip coords:",x=0,y=160,font_size=10, color="#0000FF")
line6=u_gui.draw_text(text="Ring finger tip coords:",x=0,y=200,font_size=10, color="#0000FF")
line7=u_gui.draw_text(text="Pinky finger tip coords:",x=0,y=240,font_size=10, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_HAND_RECOGNITION)
    if (huskylens.available(ALGORITHM_HAND_RECOGNITION)):
        line1.config(text=(str("Center hand ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1))))
        line2.config(text=(str("Wrist coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).wrist_x if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).wrist_y if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1))))))))
        line3.config(text=(str("Thumb tip coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).thumb_tip_x if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).thumb_tip_y if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1))))))))
        line4.config(text=(str("Index finger tip coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).index_finger_tip_x if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).index_finger_tip_y if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1))))))))
        line5.config(text=(str("Middle finger tip coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).middle_finger_tip_x if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).middle_finger_tip_y if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1))))))))
        line6.config(text=(str("Ring finger tip coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).ring_finger_tip_x if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).ring_finger_tip_y if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1))))))))
        line7.config(text=(str("Pinky finger tip coords:") + str((str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).pinky_finger_tip_x if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1)) + str((str(",") + str((huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION).pinky_finger_tip_y if huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION) else -1))))))))

**Output Result:** As shown below, after running the program, the screen displays the Gesture ID and the keypoint data for that gesture. Since this gesture has not been learned, the Gesture ID is 0.

Interface Diagram
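Each display line above calls `getCachedCenterResult` several times; reading the result once per refresh and formatting from that single object is simpler and avoids the cached result changing between calls. A minimal sketch, assuming `get_center` wraps `huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION)` and that the result object exposes the `wrist_x` / `wrist_y` attributes used in the example:

```python
# Sketch: read the center result once per refresh instead of once per field.
# `get_center` stands in for huskylens.getCachedCenterResult(ALGORITHM_HAND_RECOGNITION);
# the wrist_x / wrist_y attribute names are taken from the example above.

def wrist_label(get_center):
    """Build the 'Wrist coords:' text from a single lookup."""
    result = get_center()
    if result is None:
        return "Wrist coords:-1,-1"  # same -1 fallback as the example program
    return "Wrist coords:{},{}".format(result.wrist_x, result.wrist_y)
```

On hardware the same pattern applies to every fingertip line: fetch the result object once at the top of the loop body, then build all seven labels from it.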

In addition to the data above, more data for a specific gesture can be acquired. For example, determining if a specific gesture is in the frame, the name of a specific gesture, or the count of identical gestures in the frame. When multiple identical gestures appear, it's possible to specify and acquire parameters for one of them, including Name, gesture center X/Y coordinates, Width, Height, fingertip coordinates, wrist coordinates, and more.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_HAND_RECOGNITION)
line1=u_gui.draw_text(text="Hand ID1 count:",x=0,y=0,font_size=10, color="#0000FF")
line2=u_gui.draw_text(text="ID1 name:",x=0,y=40,font_size=10, color="#0000FF")
line3=u_gui.draw_text(text="First ID1 hand:",x=0,y=80,font_size=10, color="#0000FF")
line4=u_gui.draw_text(text="Coords:",x=0,y=120,font_size=10, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_HAND_RECOGNITION)
    if (huskylens.available(ALGORITHM_HAND_RECOGNITION)):
        line1.config(text=(str("Hand ID1 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_HAND_RECOGNITION, 1)))))
        line2.config(text=(str("ID1 name:") + str((huskylens.getCachedResultByID(ALGORITHM_HAND_RECOGNITION, 1).name if huskylens.getCachedResultByID(ALGORITHM_HAND_RECOGNITION, 1) else -1))))
        line4.config(text=(str("Coords:") + str((str((huskylens.getCachedIndexResultByID(ALGORITHM_HAND_RECOGNITION, 1, 1-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_HAND_RECOGNITION, 1, 1-1) else -1)) + str((str(",") + str((huskylens.getCachedIndexResultByID(ALGORITHM_HAND_RECOGNITION, 1, 1-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_HAND_RECOGNITION, 1, 1-1) else -1))))))))

Output Result: As shown, it can acquire the count and name of Gesture ID 1 in the frame, and the coordinate position of the first detected Gesture ID 1.

Interface Diagram
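The program above reads only the first gesture with ID 1 (index `1-1`). When several gestures share one ID, the fixed-index lookup can be generalized into a loop over all indices. A minimal sketch, where `fetch` is an assumed wrapper around `huskylens.getCachedIndexResultByID(algorithm, id, index)`, returning a result object or None as in the examples above:

```python
# Sketch: collect center coordinates for every gesture that shares one ID.
# `fetch` stands in for a wrapper around
# huskylens.getCachedIndexResultByID(algorithm, id, index) (an assumption
# based on the example programs above).

def coords_for_id(count, fetch, target_id):
    """Return (xCenter, yCenter) for indices 0..count-1, skipping misses."""
    coords = []
    for index in range(count):
        result = fetch(target_id, index)
        if result:  # skip missing results instead of recording -1
            coords.append((result.xCenter, result.yCenter))
    return coords
```

On hardware, `count` would come from `huskylens.getCachedResultNumByID(ALGORITHM_HAND_RECOGNITION, 1)`.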

11.Instance Segmentation Blocks

11.1 Instance Segmentation - Output Relevant Data

Under the Instance Segmentation function, HUSKYLENS can recognize the class of objects in an image and mark the outline of each object. You can use a program to print the total number of instances recognized by HUSKYLENS, the instance closest to the center, and the Name, ID, center X/Y coordinates, Width, and Height of a specified ID instance.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_SEGMENT)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_SEGMENT)
    if (huskylens.available(ALGORITHM_SEGMENT)):
        line1.config(text=(str("Instances count:") + str((huskylens.getCachedResultNum(ALGORITHM_SEGMENT)))))
        line2.config(text=(str("Center instance ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_SEGMENT).ID if huskylens.getCachedCenterResult(ALGORITHM_SEGMENT) else -1))))
        line3.config(text="First instance ID:")
        line4.config(text=(str("Coords:") + str((str((huskylens.getCachedResultByIndex(ALGORITHM_SEGMENT, 1 - 1).xCenter if huskylens.getCachedResultByIndex(ALGORITHM_SEGMENT, 1 - 1) else -1)) + str((str(",") + str((huskylens.getCachedResultByIndex(ALGORITHM_SEGMENT, 1 - 1).yCenter if huskylens.getCachedResultByIndex(ALGORITHM_SEGMENT, 1 - 1) else -1))))))))

Click 'Run' in Mind+ and wait for the program to upload. HuskyLens 2 will automatically enter the Instance Segmentation function. Point the HuskyLens 2 camera at the instance in the frame to learn it. For detailed instructions on how to learn instances, refer to: HUSKYLENS 2 WIKI

Once learning is complete, aim at the learned instance and observe the output results on the screen.

**Output Result:** After the program is successfully uploaded, HUSKYLENS 2 will automatically switch to the Instance Segmentation function. Point HUSKYLENS 2 at the object to be recognized (must be one of the 80 classes) and observe the number of recognized instances, the ID of a specific instance, its center point coordinates, and other data.

Interface Diagram

11.2 Get Data for a Specific Instance

Under the Instance Segmentation function, after HUSKYLENS 2 learns, it can acquire data for a specific instance in the frame. For example, it can determine if a learned instance is in the frame, the name of the specified instance, the count of similar instances, and when multiple instances of the same class appear, it can be set to acquire parameters for a specific one, including Name, X/Y coordinates, Width, and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_SEGMENT)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_SEGMENT)
    if (huskylens.available(ALGORITHM_SEGMENT)):
        line1.config(text=(str("Instances ID1 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_SEGMENT, 1)))))
        line2.config(text=(str("Instance ID1 name:") + str((huskylens.getCachedResultByID(ALGORITHM_SEGMENT, 1).name if huskylens.getCachedResultByID(ALGORITHM_SEGMENT, 1) else -1))))
        line3.config(text="First instance ID1:")
        line4.config(text=(str("Coords:") + str((str((huskylens.getCachedIndexResultByID(ALGORITHM_SEGMENT, 1, 1-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_SEGMENT, 1, 1-1) else -1)) + str((str(",") + str((huskylens.getCachedIndexResultByID(ALGORITHM_SEGMENT, 1, 1-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_SEGMENT, 1, 1-1) else -1))))))))

**Output Result:** When multiple instances of the same class appear in the frame, it can recognize the count of instances of that class and output the specified instance's ID, center point coordinates, and other related data.

Interface Diagram

12.Pose Recognition Blocks

12.1 Recognize Human Pose and Output Relevant Data

Through the example program below, it's possible to recognize human poses within the HuskyLens 2 field of view and acquire pose-related data. Readable data includes: the total number of bodies detected, the total count of learned pose IDs; the ID of the human pose closest to the center of the HuskyLens 2 camera frame, and the ID of the first detected pose.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_POSE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_POSE_RECOGNITION)
    if (huskylens.available(ALGORITHM_POSE_RECOGNITION)):
        line1.config(text=(str("Body count:") + str((huskylens.getCachedResultNum(ALGORITHM_POSE_RECOGNITION)))))
        line2.config(text=(str("Learned body count:") + str((huskylens.getCachedResultMaxID(ALGORITHM_POSE_RECOGNITION)))))
        line3.config(text=(str("Center body ID:") + str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1)))
        line4.config(text=(str("First body ID:") + str(huskylens.getCachedResultByIndex(ALGORITHM_POSE_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_POSE_RECOGNITION, 1-1) else -1)))

Click 'Run' in Mind+ and wait for the program to upload.

Enter the Pose Recognition function on HuskyLens 2. Point the HuskyLens 2 camera at the human pose in the frame to learn it. For detailed instructions on how to learn poses, refer to: HUSKYLENS 2 WIKI

Once learning is complete, aim at the pose and observe the output results on the M10 screen.

**Output Result:** It can output the total number of bodies detected and the total count of learned pose IDs; it can output the specified human pose ID. Learned human poses are assigned IDs in learning order, while unlearned poses have an ID of 0.

Interface Diagram

12.2 Get Data for a Specific Pose in the Frame

It can acquire keypoint data for a specific human pose, such as its ID, Name, facial features, and body joints. Detailed data includes: Human ID, Human Pose Name, X and Y coordinates of the human's center point, Width, Height, X/Y coordinates for left/right eyes, ears, and nose, and X/Y coordinates for left/right shoulders, elbows, wrists, hips, knees, and ankles. For details, please see the Pose Recognition Blocks Description.

The example program below shows how to get the X/Y coordinate data for the nose, left shoulder, elbow, hip, knee, and ankle of the human body closest to the center of the camera frame. This data can also be obtained for unlearned human poses.

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_POSE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=12, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=12, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=12, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=12, color="#0000FF")
line5=u_gui.draw_text(text="",x=0,y=160,font_size=12, color="#0000FF")
line6=u_gui.draw_text(text="",x=0,y=200,font_size=12, color="#0000FF")
line7=u_gui.draw_text(text="",x=0,y=240,font_size=12, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_POSE_RECOGNITION)
    if (huskylens.available(ALGORITHM_POSE_RECOGNITION)):
        line1.config(text=(str("Center body ID:") + str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1)))
        line2.config(text=(str("Nose coords:") + str((str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).nose_x if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1) + str((str(",") + str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).nose_y if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1)))))))
        line3.config(text=(str("Lshoulder coords:") + str((str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).lshoulder_x if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1) + str((str(",") + str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).lshoulder_y if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1)))))))
        line4.config(text=(str("Rshoulder coords:") + str((str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).rshoulder_x if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1) + str((str(",") + str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).rshoulder_y if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1)))))))
        line5.config(text=(str("Lelbow coords:") + str((str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).lelbow_x if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1) + str((str(",") + str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).lelbow_y if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1)))))))
        line6.config(text=(str("Relbow coords:") + str((str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).relbow_x if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1) + str((str(",") + str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).relbow_y if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1)))))))
        line7.config(text=(str("Lknee coords:") + str((str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).lknee_x if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1) + str((str(",") + str(huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION).lknee_y if huskylens.getCachedCenterResult(ALGORITHM_POSE_RECOGNITION) else -1)))))))

Output Result: As shown below, after running the program, the UNIHIKER M10 screen displays the Human ID and the coordinate data for the nose and other key body points. Since this human pose has not been learned, the Human ID is 0.

Interface Diagram
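Because the pose keypoints all follow the same `name_x` / `name_y` attribute naming, the one-statement-per-joint style above can be collapsed into a loop. A minimal sketch, assuming `result` is the object returned by `getCachedCenterResult` (or None when nothing is detected):

```python
# Sketch: build the keypoint labels in a loop instead of one statement per joint.
# The attribute names (nose_x, lshoulder_x, ...) come from the example above;
# `result` is assumed to be the getCachedCenterResult return value, or None.

KEYPOINTS = ["nose", "lshoulder", "rshoulder", "lelbow", "relbow", "lknee"]

def keypoint_lines(result):
    """Return one 'name coords:x,y' string per keypoint, using -1 when absent."""
    lines = []
    for name in KEYPOINTS:
        x = getattr(result, name + "_x", -1) if result else -1
        y = getattr(result, name + "_y", -1) if result else -1
        lines.append("{} coords:{},{}".format(name, x, y))
    return lines
```

Each returned string can then be written to the matching `u_gui.draw_text` line in the display loop.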

In addition to the data above, more human pose data can be acquired. For example, determining if a pose with a specific ID is in the frame, the name of a specific pose, or the count of identical poses in the frame. When multiple identical poses appear, it's possible to specify and acquire parameters for one of them, including Name, X/Y coordinates, Width, Height, and key body point coordinate data.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_POSE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_POSE_RECOGNITION)
    if (huskylens.available(ALGORITHM_POSE_RECOGNITION)):
        line1.config(text=(str("Body ID1 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_POSE_RECOGNITION, 1)))))
        line2.config(text=(str("Body ID1 name:") + str(huskylens.getCachedResultByID(ALGORITHM_POSE_RECOGNITION, 1).name if huskylens.getCachedResultByID(ALGORITHM_POSE_RECOGNITION, 1) else -1)))
        line3.config(text="First body ID1:")
        line4.config(text=(str("Coords:") + str((str(huskylens.getCachedIndexResultByID(ALGORITHM_POSE_RECOGNITION, 1, 1-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_POSE_RECOGNITION, 1, 1-1) else -1) + str((str(",") + str(huskylens.getCachedIndexResultByID(ALGORITHM_POSE_RECOGNITION, 1, 1-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_POSE_RECOGNITION, 1, 1-1) else -1)))))))


Output Result: As shown, it can acquire the count and name of Pose ID 1 in the frame, and the coordinate position of the first detected Pose ID 1.

Interface Diagram

13.License Recognition Blocks

13.1 License Recognition - Output Relevant Data

Under the License Recognition function, when a license plate appears on the HUSKYLENS 2 screen, it can be recognized and framed, and its relevant data acquired. Readable license plate data includes: the ID and Name of a specific license plate, the license plate content (license number), Width, Height, X and Y coordinates of the license plate's center point, the total number of license plates in the frame, and the total number of learned license plates in the frame.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_LICENSE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=10, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=10, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=10, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=10, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_LICENSE_RECOGNITION)
    if (huskylens.available(ALGORITHM_LICENSE_RECOGNITION)):
        line1.config(text=(str("License plate count:") + str((huskylens.getCachedResultNum(ALGORITHM_LICENSE_RECOGNITION)))))
        line2.config(text=(str("Learned License plate count:") + str((huskylens.getCachedResultMaxID(ALGORITHM_LICENSE_RECOGNITION)))))
        line3.config(text=(str("Center license plate ID:") + str(huskylens.getCachedCenterResult(ALGORITHM_LICENSE_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_LICENSE_RECOGNITION) else -1)))
        line4.config(text=(str("1st License plate ID:") + str(huskylens.getCachedResultByIndex(ALGORITHM_LICENSE_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_LICENSE_RECOGNITION, 1-1) else -1)))

Click 'Run' in Mind+ and wait for the program to upload.

Point the HuskyLens 2 camera at the license plate in the frame to learn it. For detailed instructions on how to learn license plates, refer to HUSKYLENS 2 WIKI

Point the HUSKYLENS 2 camera at the license plate and observe the data on the UNIHIKER M10 screen.

Output Result: As shown below, three license plates are recognized in the frame, one of which is learned. The ID of the license plate near the center and the ID of the first detected license plate are both 0. (Unlearned license plates have an ID of 0; learned license plates output their corresponding ID.)

Interface Diagram

13.2 Output Data for a Specific ID License Plate

When multiple license plates of the same ID appear in the frame, the following example program can be used to gather relevant data for that ID's license plates.

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_LICENSE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=10, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=10, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=10, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=10, color="#0000FF")
line5=u_gui.draw_text(text="",x=0,y=160,font_size=10, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_LICENSE_RECOGNITION)
    if (huskylens.available(ALGORITHM_LICENSE_RECOGNITION)):
        line1.config(text=(str("License plate ID1 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_LICENSE_RECOGNITION, 1)))))
        line2.config(text="License plate ID1:")
        line3.config(text=(str("Content:") + str(huskylens.getCachedResultByID(ALGORITHM_LICENSE_RECOGNITION, 1).content if huskylens.getCachedResultByID(ALGORITHM_LICENSE_RECOGNITION, 1) else -1)))
        line4.config(text="1st license plate ID1:")
        line5.config(text=(str("Coords:") + str((str((str(huskylens.getCachedIndexResultByID(ALGORITHM_LICENSE_RECOGNITION, 1, 1-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_LICENSE_RECOGNITION, 1, 1-1) else -1) + str(","))) + str(huskylens.getCachedIndexResultByID(ALGORITHM_LICENSE_RECOGNITION, 1, 1-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_LICENSE_RECOGNITION, 1, 1-1) else -1)))))

**Output Result:** As shown below, when multiple license plates of a specific ID appear in the frame, it can acquire the total count of that ID's license plates, the license plate number of a specific plate under that ID, and its coordinates, among other data.

Interface Diagram

14.Optical Char Recognition Blocks

14.1 Optical Char Recognition - Output Relevant Data

Under the Optical Char Recognition function, HUSKYLENS 2 can recognize and frame the areas where text blocks appear in its field of view, and display the recognized text on the screen. The following example program can be used to count the total number of recognizable text blocks in the frame and acquire data for the text block closest to the crosshair. Readable data includes: the text block's ID, Name, Content, center X and Y coordinates, and the text block's Width and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_OCR_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
line5=u_gui.draw_text(text="",x=0,y=160,font_size=15, color="#0000FF")
line6=u_gui.draw_text(text="",x=0,y=200,font_size=15, color="#0000FF")
line7=u_gui.draw_text(text="",x=0,y=240,font_size=15, color="#0000FF")
line8=u_gui.draw_text(text="",x=0,y=320,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_OCR_RECOGNITION)
    if (huskylens.available(ALGORITHM_OCR_RECOGNITION)):
        line1.config(text=(str("Text count:") + str((huskylens.getCachedResultNum(ALGORITHM_OCR_RECOGNITION)))))
        line2.config(text=(str("Center text ID:") + str(huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION) else -1)))
        line3.config(text="Center text content:")
        line4.config(text=huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION).content if huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION) else -1)
        line5.config(text="Center text")
        line6.config(text=(str("Coords:") + str((str(huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION).xCenter if huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION) else -1) + str((str(",") + str(huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION).yCenter if huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION) else -1)))))))
        line7.config(text="Center text")
        line8.config(text=(str("Width & height:") + str((str(huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION).width if huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION) else -1) + str((str(",") + str(huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION).height if huskylens.getCachedCenterResult(ALGORITHM_OCR_RECOGNITION) else -1)))))))

Click 'Run' in Mind+ and wait for the program to upload.

Point the HUSKYLENS 2 camera at any optical characters and observe the results displayed on the UNIHIKER M10 screen.

**Output Result:** As shown below, the default output ID for unlearned text blocks is 0.

Interface Diagram

You can aim it at a learned text block. For detailed instructions on how to learn optical characters, refer to: HUSKYLENS 2 WIKI

Output Result: As shown below, the output ID for a learned text block matches the ID displayed on the HUSKYLENS 2 screen.

Note: Under the Optical Char Recognition function, HUSKYLENS 2 can detect and frame all areas where text blocks appear in the frame, but it only recognizes the content of the single text block area where the crosshair is located and displays the text content at the top left of the frame.

Interface Diagram

15.Line Tracking Blocks

15.1 Recognize Intersections and Output Data

Under the Line Tracking function, HUSKYLENS 2 can mark the trajectory of the line in the frame and acquire the current line's length, angle, and X, Y components. When the line branches, it can get the number of branches at the intersection and the corresponding data for each branch, starting counterclockwise.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_LINE_TRACKING)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
line5=u_gui.draw_text(text="",x=0,y=160,font_size=15, color="#0000FF")
line6=u_gui.draw_text(text="",x=0,y=200,font_size=15, color="#0000FF")
line7=u_gui.draw_text(text="",x=0,y=240,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_LINE_TRACKING)
    if (huskylens.available(ALGORITHM_LINE_TRACKING)):
        line1.config(text=(str("Route angle:") + str((huskylens.getCurrentBranch(ALGORITHM_LINE_TRACKING,'angle')))))
        line2.config(text=(str("Route length:") + str((huskylens.getCurrentBranch(ALGORITHM_LINE_TRACKING,'length')))))
        line3.config(text="X & Y component:")
        line4.config(text=(str((huskylens.getCurrentBranch(ALGORITHM_LINE_TRACKING,'xTarget'))) + str((str(",") + str((huskylens.getCurrentBranch(ALGORITHM_LINE_TRACKING,'yTarget')))))))
        line5.config(text=(str("Junction branch count:") + str((huskylens.getUpcomingBranchCount(ALGORITHM_LINE_TRACKING)))))
        line6.config(text="Counterclockwise 1st route")
        line7.config(text=(str("X component:") + str((huskylens.getBranch(ALGORITHM_LINE_TRACKING, 1-1,'xTarget')))))

**Output Result:** As shown, point the HUSKYLENS camera at a map with lines. Observe the UNIHIKER M10 screen output, which displays data such as the current line's length and angle. When the line has multiple branches, it can output the number of branches and the X-component data of a specified line branch, starting counterclockwise.

Interface Diagram
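For line-following robots, the X and Y components read above are usually converted into a steering angle. A minimal sketch of that conversion, assuming `'xTarget'` / `'yTarget'` from `getCurrentBranch` are the components of the branch vector (the rest is plain trigonometry):

```python
import math

# Sketch: turn the branch's X/Y components into a steering angle.
# The xTarget / yTarget values are assumed to be the vector components
# returned by getCurrentBranch / getBranch in the example above.

def branch_angle_deg(x_target, y_target):
    """Angle of the branch vector in degrees (0 = along +X, 90 = along +Y)."""
    return math.degrees(math.atan2(y_target, x_target))
```

The same function applies to each branch at an intersection, letting a program pick, for example, the branch whose angle is closest to straight ahead.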

16.Face Emotion Recognition Blocks

16.1 Face Emotion Recognition - Output Relevant Data

Under the Face Emotion Recognition function, HUSKYLENS 2 can recognize 7 specific emotions: angry (ID 1), disgust (ID 2), fear (ID 3), happy (ID 4), neutral (ID 5), sad (ID 6), and surprise (ID 7). These emotions have already been learned by HUSKYLENS 2 at the factory, so users do not need to learn them manually again. For detailed function instructions, please see HUSKYLENS 2 WIKI
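Since the seven IDs are fixed at the factory, the mapping listed above can be kept in a plain dictionary so a reported ID is turned into a readable label (0, i.e. no learned match, falls through to "unknown"):

```python
# Factory ID-to-emotion mapping, as listed above.
EMOTIONS = {
    1: "angry", 2: "disgust", 3: "fear", 4: "happy",
    5: "neutral", 6: "sad", 7: "surprise",
}

def emotion_name(emotion_id):
    """Readable label for a reported emotion ID; 'unknown' for anything else."""
    return EMOTIONS.get(emotion_id, "unknown")
```

In the display loops below, such a lookup can replace raw ID numbers in the on-screen text.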

The example program below can be used to count the total number of all recognized emotions in the current HUSKYLENS 2 camera frame and output the ID of a specific emotion.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_EMOTION_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=20, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=20, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=20, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_EMOTION_RECOGNITION)
    if (huskylens.available(ALGORITHM_EMOTION_RECOGNITION)):
        line1.config(text=(str("Faces count:") + str((huskylens.getCachedResultNum(ALGORITHM_EMOTION_RECOGNITION)))))
        line2.config(text=(str("Center face ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_EMOTION_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_EMOTION_RECOGNITION) else -1))))
        line3.config(text=(str("First face ID:") + str((huskylens.getCachedResultByIndex(ALGORITHM_EMOTION_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_EMOTION_RECOGNITION, 1-1) else -1))))

Click 'Run' in Mind+ and wait for the program to upload.

When any of the seven emotions above appears in the camera frame, observe the HUSKYLENS 2 screen. It will frame the emotion and display its ID, name, and confidence. At the same time, the UNIHIKER M10 screen will display the result data output by the program.

Output Result: As shown below, it outputs the specified emotion ID and the total number of emotions in the frame, as required by the program.

Interface Diagram

16.2 Get Data for a Specific Emotion in the Frame

When multiple emotions with the same ID appear in the frame, the following example program can be used to gather relevant data for that emotion ID.

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_EMOTION_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=20, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=20, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=20, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_EMOTION_RECOGNITION)
    if (huskylens.available(ALGORITHM_EMOTION_RECOGNITION)):
        line1.config(text=(str("Emotion ID1 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_EMOTION_RECOGNITION, 1)))))
        line2.config(text=(str("Emotion ID1 name:") + str(huskylens.getCachedResultByID(ALGORITHM_EMOTION_RECOGNITION, 1).name if huskylens.getCachedResultByID(ALGORITHM_EMOTION_RECOGNITION, 1) else -1)))
        line3.config(text=(str("First ID1 coords:") + str((str(huskylens.getCachedIndexResultByID(ALGORITHM_EMOTION_RECOGNITION, 1, 1-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_EMOTION_RECOGNITION, 1, 1-1) else -1) + str((str(",") + str(huskylens.getCachedIndexResultByID(ALGORITHM_EMOTION_RECOGNITION, 1, 1-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_EMOTION_RECOGNITION, 1, 1-1) else -1)))))))

Output Result: As shown below, when multiple emotions with a specific ID appear in the frame, it can acquire the total count, name, and coordinates of a specific emotion under that ID, among other data.

Interface Diagram
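The `... if ... else -1` guard repeated on each line of the example protects against `getCachedCenterResult`/`getCachedResultByIndex` returning `None` when nothing is detected. If you write the Python by hand rather than generating it from blocks, that pattern can be factored into a small helper. This is a plain-Python sketch, not part of the dfrobot_huskylensv2 library:

```python
def result_attr(result, attr, default=-1):
    """Return an attribute of a recognition result, or a default
    value when the result is None (i.e. nothing was detected)."""
    return getattr(result, attr) if result is not None else default

# Illustration with a stand-in result object; a real program would
# pass e.g. huskylens.getCachedCenterResult(...) here instead.
class FakeResult:
    ID = 2

print(result_attr(FakeResult(), "ID"))  # 2
print(result_attr(None, "ID"))          # -1
```

With this helper, a display line shortens to, for example, `line2.config(text="Center face ID:" + str(result_attr(huskylens.getCachedCenterResult(ALGORITHM_EMOTION_RECOGNITION), "ID")))`.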

17.Tag Recognition Blocks

17.1 Tag Recognition - Output Relevant Data

HUSKYLENS 2 can recognize AprilTag tags appearing in the frame. You can get data about detected tags through programming. Readable tag data includes: data for a specific tag (such as ID, content, width, height, and X/Y coordinates of the center point) and the total number of detected tags.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_TAG_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_TAG_RECOGNITION)
    if (huskylens.available(ALGORITHM_TAG_RECOGNITION)):
        line1.config(text=(str("Tag count:") + str((huskylens.getCachedResultNum(ALGORITHM_TAG_RECOGNITION)))))
        line2.config(text=(str("Learned tag count:") + str((huskylens.getCachedResultMaxID(ALGORITHM_TAG_RECOGNITION)))))
        line3.config(text=(str("Center tag ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_TAG_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_TAG_RECOGNITION) else -1))))
        line4.config(text=(str("First tag ID:") + str((huskylens.getCachedResultByIndex(ALGORITHM_TAG_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_TAG_RECOGNITION, 1-1) else -1))))

Click 'Run' in Mind+ and wait for the program to upload.

Point the HuskyLens 2 camera at the tag in the frame to learn it. For detailed instructions on how to learn tags, refer to: Tag recognition function - Learn Tags.

Point the HUSKYLENS 2 camera at the tag and observe the results displayed on the UNIHIKER M10 screen.

Output Result: As shown, it can output the number of detected tags (whether learned or not) and the specified tag ID. Unlearned tags have an ID of 0.

Interface Diagram

17.2 Get Data for a Specific Tag in the Frame

After HuskyLens 2 recognizes tags, it can acquire data for a specific tag in the frame. For example, it can determine if a tag with a specific ID is in the frame, get the count of tags with the same ID, and when multiple tags with the same ID appear, it can be set to acquire parameters for a specific one, including Name, Content, X/Y coordinates, Width, and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_TAG_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
line5=u_gui.draw_text(text="",x=0,y=160,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_TAG_RECOGNITION)
    if (huskylens.available(ALGORITHM_TAG_RECOGNITION)):
        line1.config(text=(str("Tag ID0 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_TAG_RECOGNITION, 0)))))
        line2.config(text="First ID0")
        line3.config(text=(str("Content:") + str((huskylens.getCachedIndexResultByID(ALGORITHM_TAG_RECOGNITION, 0, 1-1).content if huskylens.getCachedIndexResultByID(ALGORITHM_TAG_RECOGNITION, 0, 1-1) else -1))))
        line4.config(text="Second ID0")
        line5.config(text=(str("Coords:") + str((str((huskylens.getCachedIndexResultByID(ALGORITHM_TAG_RECOGNITION, 0, 2-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_TAG_RECOGNITION, 0, 2-1) else -1)) + str((str(",") + str((huskylens.getCachedIndexResultByID(ALGORITHM_TAG_RECOGNITION, 0, 2-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_TAG_RECOGNITION, 0, 2-1) else -1))))))))

Output Result: As shown, there are two unlearned tags (ID 0) in the frame. The first ID 0 tag is on the left, and its content is 10. The second ID 0 tag is on the right, and its coordinates are (318, 183).

Interface Diagram
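The coordinate string in the example above is built from deeply nested `str(...)` concatenations because Mind+ generates it from blocks. Written by hand, the same "x,y" text can come from a small formatting helper. This is a sketch; `xCenter`/`yCenter` are the attribute names used by the library results in the examples above:

```python
def coords_text(result):
    """Format a result's center point as "x,y", or "-1,-1"
    when no result is available."""
    if result is None:
        return "-1,-1"
    return f"{result.xCenter},{result.yCenter}"

# Stand-in object for illustration only:
class FakeTag:
    xCenter = 318
    yCenter = 183

print(coords_text(FakeTag()))  # 318,183
print(coords_text(None))       # -1,-1
```

A real program would pass `huskylens.getCachedIndexResultByID(ALGORITHM_TAG_RECOGNITION, 0, 1)` (or similar) as the argument.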

18.QR Code Recognition Blocks

18.1 QR Code Recognition - Output Relevant Data

HUSKYLENS 2 can recognize QR codes appearing in the frame. You can get data about detected QR codes through programming. Readable QR code data includes: the total number of detected QR codes, the total number of learned QR codes, and data for a specific QR code, including its ID, Content, Width, Height, and X/Y coordinates of the center point.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_QRCODE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_QRCODE_RECOGNITION)
    if (huskylens.available(ALGORITHM_QRCODE_RECOGNITION)):
        line1.config(text=(str("QR code count:") + str((huskylens.getCachedResultNum(ALGORITHM_QRCODE_RECOGNITION)))))
        line2.config(text=(str("Learned QR code count:") + str((huskylens.getCachedResultMaxID(ALGORITHM_QRCODE_RECOGNITION)))))
        line3.config(text=(str("Center QR code ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_QRCODE_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_QRCODE_RECOGNITION) else -1))))
        line4.config(text=(str("1st QR code ID:") + str((huskylens.getCachedResultByIndex(ALGORITHM_QRCODE_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_QRCODE_RECOGNITION, 1-1) else -1))))

Click 'Run' in Mind+ and wait for the program to upload.

Point the HuskyLens 2 camera at the QR code in the frame to learn it. For detailed instructions on how to learn QR codes, refer to: HUSKYLENS 2 WIKI.

Point the HUSKYLENS 2 camera at the QR code and observe the results displayed on the UNIHIKER M10 screen.

Output Result: As shown, it can output the number of detected QR codes (whether learned or not), the number of learned QR codes, and the specified QR code ID. Unlearned QR codes have an ID of 0.

Interface Diagram

18.2 Get Data for a Specific QR Code in the Frame

After HuskyLens 2 recognizes QR codes, it can acquire data for a specific QR code in the frame. For example, it can determine if a QR code with a specific ID is in the frame, get the count of QR codes with the same ID, and when multiple QR codes with the same ID appear, it can be set to acquire parameters for a specific one, including Name, Content, X/Y coordinates, Width, and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_QRCODE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
line5=u_gui.draw_text(text="",x=0,y=160,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_QRCODE_RECOGNITION)
    if (huskylens.available(ALGORITHM_QRCODE_RECOGNITION)):
        line1.config(text=(str("QR code ID0 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_QRCODE_RECOGNITION, 0)))))
        line2.config(text="1st QR code ID0:")
        line3.config(text=(str("Content:") + str((huskylens.getCachedIndexResultByID(ALGORITHM_QRCODE_RECOGNITION, 0, 1-1).content if huskylens.getCachedIndexResultByID(ALGORITHM_QRCODE_RECOGNITION, 0, 1-1) else -1))))
        line4.config(text="2nd QR code ID0")
        line5.config(text=(str("Coords:") + str((str((huskylens.getCachedIndexResultByID(ALGORITHM_QRCODE_RECOGNITION, 0, 2-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_QRCODE_RECOGNITION, 0, 2-1) else -1)) + str((str(",") + str((huskylens.getCachedIndexResultByID(ALGORITHM_QRCODE_RECOGNITION, 0, 2-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_QRCODE_RECOGNITION, 0, 2-1) else -1))))))))

Output Result: As shown, there are two unlearned QR codes (ID 0) in the frame. The first ID 0 QR code is on the right, and its content is 'abc'. The second ID 0 QR code is on the left, and its coordinates are (188, 150).

Interface Diagram

19.Barcode Recognition Blocks

19.1 Barcode Recognition - Output Relevant Data

HUSKYLENS 2 can recognize barcodes appearing in the frame. You can get data about detected barcodes through programming. Readable barcode data includes: the total number of detected barcodes, and data for a specific barcode, including its ID, Content, Width, Height, and X/Y coordinates of the center point.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_BARCODE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_BARCODE_RECOGNITION)
    if (huskylens.available(ALGORITHM_BARCODE_RECOGNITION)):
        line1.config(text=(str("Barcode count:") + str((huskylens.getCachedResultNum(ALGORITHM_BARCODE_RECOGNITION)))))
        line2.config(text=(str("Learned barcode count:") + str((huskylens.getCachedResultMaxID(ALGORITHM_BARCODE_RECOGNITION)))))
        line3.config(text=(str("Center barcode ID:") + str((huskylens.getCachedCenterResult(ALGORITHM_BARCODE_RECOGNITION).ID if huskylens.getCachedCenterResult(ALGORITHM_BARCODE_RECOGNITION) else -1))))
        line4.config(text=(str("1st barcode ID:") + str((huskylens.getCachedResultByIndex(ALGORITHM_BARCODE_RECOGNITION, 1-1).ID if huskylens.getCachedResultByIndex(ALGORITHM_BARCODE_RECOGNITION, 1-1) else -1))))

Click 'Run' in Mind+ and wait for the program to upload.

Point the HuskyLens 2 camera at the barcode in the frame to learn it. For detailed instructions on how to learn barcodes, refer to: HUSKYLENS 2 WIKI.

Point the HUSKYLENS 2 camera at the barcode and observe the results displayed on the UNIHIKER M10 screen.

Output Result: As shown, it can output the number of detected barcodes (whether learned or not) and the specified barcode ID. Unlearned barcodes have an ID of 0.

Interface Diagram
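Note that the `while True` loops in these examples poll the HUSKYLENS 2 as fast as possible. Adding a short delay each iteration eases the load on the UNIHIKER M10 and the connection; this is an optional tweak (not required by the library), sketched here with only the standard `time` module:

```python
import time

def throttled(interval=0.2):
    """Yield forever, sleeping between iterations, so a polling
    loop runs at most roughly once per `interval` seconds."""
    while True:
        yield
        time.sleep(interval)

# Usage sketch: replace `while True:` with
# for _ in throttled():
#     huskylens.getResult(ALGORITHM_BARCODE_RECOGNITION)
#     ...
```

Alternatively, a plain `time.sleep(0.2)` at the end of the loop body has the same effect.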

19.2 Get Data for a Specific Barcode in the Frame

After HuskyLens 2 recognizes barcodes, it can acquire data for a specific barcode in the frame. For example, it can determine if a barcode with a specific ID is in the frame, get the count of barcodes with the same ID, and when multiple barcodes with the same ID appear, it can be set to acquire parameters for a specific one, including Name, Content, X/Y coordinates, Width, and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(ALGORITHM_BARCODE_RECOGNITION)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
line5=u_gui.draw_text(text="",x=0,y=160,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(ALGORITHM_BARCODE_RECOGNITION)
    if (huskylens.available(ALGORITHM_BARCODE_RECOGNITION)):
        line1.config(text=(str("Barcode ID0 count:") + str((huskylens.getCachedResultNumByID(ALGORITHM_BARCODE_RECOGNITION, 0)))))
        line2.config(text="1st barcode ID0")
        line3.config(text=(str("Content:") + str(huskylens.getCachedIndexResultByID(ALGORITHM_BARCODE_RECOGNITION, 0, 1-1).content if huskylens.getCachedIndexResultByID(ALGORITHM_BARCODE_RECOGNITION, 0, 1-1) else -1)))
        line4.config(text="2nd barcode ID0")
        line5.config(text=(str("Coords:") + str((str(huskylens.getCachedIndexResultByID(ALGORITHM_BARCODE_RECOGNITION, 0, 2-1).xCenter if huskylens.getCachedIndexResultByID(ALGORITHM_BARCODE_RECOGNITION, 0, 2-1) else -1) + str((str(",") + str(huskylens.getCachedIndexResultByID(ALGORITHM_BARCODE_RECOGNITION, 0, 2-1).yCenter if huskylens.getCachedIndexResultByID(ALGORITHM_BARCODE_RECOGNITION, 0, 2-1) else -1)))))))

Output Result: As shown, there are two unlearned barcodes (ID 0) in the frame. The first ID 0 barcode is at the bottom right, and its content is '23'. The second ID 0 barcode is the top one, and its coordinates are (391, 345).

Interface Diagram

20.Self-Trained Model

In addition to the visual recognition functions built into HUSKYLENS 2, users can also train their own models and deploy them to HUSKYLENS 2 to create their own unique visual recognition projects. To use this feature, please see the Deploy Custom-Trained Models

Note: Please ensure the current HUSKYLENS 2 firmware is version 1.15 or above. See: Firmware Update Tutorial.

The following will use a self-trained object detection model "Product Recognition" as an example to introduce how the UNIHIKER M10 reads the recognition results of the HUSKYLENS 2 self-trained model.

20.1 Detect Target and Output Relevant Data

HUSKYLENS 2 can recognize target objects appearing in the frame (it can only recognize object classes that were in the dataset when the user trained the model). You can get data about detected target objects through programming.

After the self-trained model is deployed, it can recognize objects of the same class that the user has labeled. Like other functions, the self-trained model application supports learning the detected objects. When a target object is framed, press the A button to learn the object and assign it an ID.

Readable target data includes: the total number of detected targets, the total number of learned targets, and data for a specific target, including its ID, Name, Width, Height, and X/Y coordinates of the center point.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(129)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(129)
    if (huskylens.available(129)):
        line1.config(text=(str("Objects count:") + str((huskylens.getCachedResultNum(129)))))
        line2.config(text=(str("Learned objects count:") + str((huskylens.getCachedResultMaxID(129)))))
        line3.config(text=(str("Center object ID:") + str((huskylens.getCachedCenterResult(129).ID if huskylens.getCachedCenterResult(129) else -1))))
        line4.config(text=(str("1st object name:") + str(huskylens.getCachedIndexResultByID(129, 1, 1-1).name if huskylens.getCachedIndexResultByID(129, 1, 1-1) else -1)))

Output Result: As shown, 2 target objects are recognized in the frame, one of which is learned. The target closest to the center is ID 0 (mouse), and the first recognized target is named apple. Objects learned in HUSKYLENS 2 are assigned IDs in order; unlearned objects have an ID of 0.

Interface Diagram

20.2 Get Data for a Specific Target in the Frame

After HuskyLens 2 recognizes target objects, it can acquire data for a specific target in the frame. For example, it can determine if a target with a specific ID is in the frame, get the count of targets with the same ID, and when multiple targets with the same ID appear, it can be set to acquire parameters for a specific one, including ID, Name, X/Y coordinates, Width, and Height.

Example Program:

from unihiker import GUI
from pinpong.board import Board
from dfrobot_huskylensv2 import *


u_gui=GUI()
Board().begin()
huskylens = HuskylensV2_I2C()
huskylens.knock()
huskylens.switchAlgorithm(129)
line1=u_gui.draw_text(text="",x=0,y=0,font_size=15, color="#0000FF")
line2=u_gui.draw_text(text="",x=0,y=40,font_size=15, color="#0000FF")
line3=u_gui.draw_text(text="",x=0,y=80,font_size=15, color="#0000FF")
line4=u_gui.draw_text(text="",x=0,y=120,font_size=15, color="#0000FF")
line5=u_gui.draw_text(text="",x=0,y=160,font_size=15, color="#0000FF")
line6=u_gui.draw_text(text="",x=0,y=200,font_size=15, color="#0000FF")
while True:
    huskylens.getResult(129)
    if (huskylens.available(129)):
        if ((huskylens.getCachedResultByID(129, 1) is not None)):
            line1.config(text=(str("Object ID1 count:") + str((huskylens.getCachedResultNumByID(129, 1)))))
            line2.config(text=(str("Object ID1 name:") + str((huskylens.getCachedResultByID(129, 1).name if huskylens.getCachedResultByID(129, 1) else -1))))
            line3.config(text="1st object ID1")
            line4.config(text=(str("Coords:") + str((str(huskylens.getCachedIndexResultByID(129, 1, 1-1).xCenter if huskylens.getCachedIndexResultByID(129, 1, 1-1) else -1) + str((str(",") + str(huskylens.getCachedIndexResultByID(129, 1, 1-1).yCenter if huskylens.getCachedIndexResultByID(129, 1, 1-1) else -1)))))))
            line5.config(text="2nd object ID1")
            line6.config(text=(str("Coords:") + str((str(huskylens.getCachedIndexResultByID(129, 1, 2-1).xCenter if huskylens.getCachedIndexResultByID(129, 1, 2-1) else -1) + str((str(",") + str(huskylens.getCachedIndexResultByID(129, 1, 2-1).yCenter if huskylens.getCachedIndexResultByID(129, 1, 2-1) else -1)))))))

Output Result: As shown, there are 2 ID 1 target objects in the frame, named 'person'. The coordinates of the first recognized ID 1 target are (319, 237).

Interface Diagram
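Conceptually, `getCachedResultNumByID` counts the cached recognition results whose ID matches the one you ask for. The same logic in plain Python (a sketch over a hypothetical `results` list, not the library's internals) looks like this:

```python
from collections import namedtuple

def count_by_id(results, target_id):
    """Count recognition results carrying the given ID."""
    return sum(1 for r in results if r.ID == target_id)

# Illustration with stand-in results (two ID 1 targets, one ID 0):
Result = namedtuple("Result", ["ID", "xCenter", "yCenter"])
results = [Result(1, 319, 237), Result(1, 120, 80), Result(0, 50, 60)]
print(count_by_id(results, 1))  # 2
```

This also clarifies why the example above first checks `getCachedResultByID(129, 1) is not None`: when no result carries the requested ID, the per-ID lookups have nothing to return.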